98 research outputs found

    CS 643-101: Cloud Computing

    CS 643: Cloud Computing

    CS 643-102: Cloud Computing

    CS 643-101: Cloud Computing

    CS 643-850: Cloud Computing

    Enabling Social Applications via Decentralized Social Data Management

    An unprecedented wealth of information produced by online social networks, further augmented by location/collocation data, is currently fragmented across different proprietary services. Combined, it can accurately represent the social world and enable novel socially-aware applications. We present Prometheus, a socially-aware peer-to-peer service that collects social information from multiple sources into a multigraph managed in a decentralized fashion on user-contributed nodes, and exposes it through an interface implementing non-trivial social inferences while complying with user-defined access policies. Simulations and experiments on PlanetLab with emulated application workloads show that the system exhibits good end-to-end response time, low communication overhead, and resilience to malicious attacks.
    Comment: 27 pages, single ACM column, 9 figures, accepted in Special Issue of Foundations of Social Computing, ACM Transactions on Internet Technology
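
    The abstract above names the building blocks, a decentralized social multigraph, inference functions, and user-defined access policies, without showing an interface. The sketch below is only an illustration of those ideas in Python; the class and method names (SocialMultigraph, add_edge, top_relations) are hypothetical and are not the Prometheus API.

```python
from collections import defaultdict

class SocialMultigraph:
    """Toy multigraph: parallel edges keyed by (relation, source service)."""

    def __init__(self):
        # edges[u][v] -> list of (relation, source, weight)
        self.edges = defaultdict(lambda: defaultdict(list))
        # policies[u] -> set of users allowed to run inferences about u
        self.policies = defaultdict(set)

    def add_edge(self, u, v, relation, source, weight=1.0):
        self.edges[u][v].append((relation, source, weight))

    def allow(self, owner, requester):
        self.policies[owner].add(requester)

    def top_relations(self, requester, user, k=3):
        """Aggregate edge weights across services; enforce the owner's policy."""
        if requester != user and requester not in self.policies[user]:
            raise PermissionError("access policy denies this inference")
        totals = defaultdict(float)
        for v, parallel_edges in self.edges[user].items():
            for _relation, _source, weight in parallel_edges:
                totals[v] += weight
        return sorted(totals.items(), key=lambda kv: -kv[1])[:k]

# Example: friendship data from one service, collocation data from another.
g = SocialMultigraph()
g.add_edge("alice", "bob", "friend", "osn-A", 1.0)
g.add_edge("alice", "bob", "collocated", "mobile-B", 0.4)
g.add_edge("alice", "carol", "friend", "osn-A", 0.8)
g.allow("alice", "carol")
print(g.top_relations("carol", "alice"))   # [('bob', 1.4), ('carol', 0.8)]
```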

    Concept Matching: Clustering-based Federated Continual Learning

    Federated Continual Learning (FCL) has emerged as a promising paradigm that combines Federated Learning (FL) and Continual Learning (CL). To achieve good model accuracy, FCL needs to tackle catastrophic forgetting due to concept drift over time in CL, and to overcome the potential interference among clients in FL. We propose Concept Matching (CM), a clustering-based framework for FCL to address these challenges. The CM framework groups the client models into concept model clusters, and then builds different global models to capture different concepts in FL over time. In each round, the server sends the global concept models to the clients. To avoid catastrophic forgetting, each client selects the concept model that best matches the concept of its current data for further fine-tuning. To avoid interference among client models with different concepts, the server clusters the models representing the same concept, aggregates the model weights in each cluster, and updates the global concept model with the cluster model of the same concept. Since the server does not know the concepts captured by the aggregated cluster models, we propose a novel server concept matching algorithm that effectively updates a global concept model with a matching cluster model. The CM framework provides the flexibility to use different clustering, aggregation, and concept matching algorithms. The evaluation demonstrates that CM outperforms state-of-the-art systems and scales well with the number of clients and the model size.
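
    As a rough illustration of the server-side steps described above (cluster the client models, aggregate each cluster, then match each cluster model to a global concept model), here is a minimal Python sketch. The flattened weight vectors, the use of k-means, the Euclidean matching criterion, and the function name server_round are assumptions made for illustration, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans

def server_round(client_models, global_concepts, n_concepts, lr=1.0):
    """One hypothetical server round in a CM-style scheme.

    client_models:   (n_clients, d) flattened client model weights from this round
    global_concepts: (n_concepts, d) current global concept models
    Returns the updated global concept models.
    """
    # 1. Cluster the uploaded client models; clients training on the same
    #    concept should end up in the same cluster.
    labels = KMeans(n_clusters=n_concepts, n_init=10).fit_predict(client_models)

    # 2. Aggregate (average) the client models inside each cluster.
    cluster_models = np.stack([
        client_models[labels == c].mean(axis=0) for c in range(n_concepts)
    ])

    # 3. Match each cluster model to the nearest global concept model
    #    (Euclidean distance is an assumed criterion) and update that concept.
    updated = global_concepts.copy()
    for cluster_model in cluster_models:
        j = int(np.argmin(np.linalg.norm(global_concepts - cluster_model, axis=1)))
        updated[j] = (1 - lr) * global_concepts[j] + lr * cluster_model
    return updated
```

    With lr=1.0 the matched concept model is simply replaced by the cluster model; smaller values would blend the two.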

    Complement Sparsification: Low-Overhead Model Pruning for Federated Learning

    Federated Learning (FL) is a privacy-preserving distributed deep learning paradigm that involves substantial communication and computation effort, which is a problem for resource-constrained mobile and IoT devices. Model pruning/sparsification develops sparse models that could solve this problem, but existing sparsification solutions cannot simultaneously satisfy the requirements for low bidirectional communication overhead between the server and the clients, low computation overhead at the clients, and good model accuracy, under the FL assumption that the server does not have access to raw data to fine-tune the pruned models. We propose Complement Sparsification (CS), a pruning mechanism that satisfies all these requirements through complementary and collaborative pruning done at the server and the clients. At each round, CS creates a global sparse model that contains the weights that capture the general data distribution of all clients, while the clients create local sparse models with the weights pruned from the global model to capture the local trends. For improved model performance, these two types of complementary sparse models are aggregated into a dense model in each round, which is subsequently pruned in an iterative process. CS requires little computation overhead on top of vanilla FL for both the server and the clients. We demonstrate that CS is an approximation of vanilla FL and, thus, its models perform well. We evaluate CS experimentally with two popular FL benchmark datasets. CS achieves a substantial reduction in bidirectional communication while achieving performance comparable with vanilla FL. In addition, CS outperforms baseline pruning mechanisms for FL.
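
    To make the complementary pruning idea concrete, here is a toy numpy sketch under stated assumptions: magnitude-based pruning at the server, a complement mask at the clients, and simple averaging at aggregation. The function names (server_prune, client_update, aggregate) and the single-gradient-step client update are illustrative, not the paper's exact procedure.

```python
import numpy as np

def server_prune(dense_weights, keep_ratio=0.2):
    """Server: keep only the largest-magnitude weights (global sparse model)."""
    k = max(1, int(keep_ratio * dense_weights.size))
    threshold = np.sort(np.abs(dense_weights).ravel())[-k]
    server_mask = np.abs(dense_weights) >= threshold
    return dense_weights * server_mask, server_mask

def client_update(global_sparse, server_mask, local_gradient, lr=0.1):
    """Client: fine-tune locally, then keep only the weights the server pruned
    away (the complement of the server mask), capturing local trends."""
    updated = global_sparse - lr * local_gradient
    return updated * ~server_mask

def aggregate(global_sparse, client_sparse_models):
    """Merge the complementary sparse models back into a dense model,
    which would be pruned again in the next round."""
    return global_sparse + np.mean(client_sparse_models, axis=0)

# Toy round with random weights and gradients (illustration only).
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))
global_sparse, mask = server_prune(w)
clients = [client_update(global_sparse, mask, rng.normal(size=w.shape))
           for _ in range(3)]
dense_next = aggregate(global_sparse, clients)
```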

    To be Tough or Soft: Measuring the Impact of Counter-Ad-blocking Strategies on User Engagement

    The fast-growing usage of ad blockers results in a large revenue decrease for ad-supported online websites. Facing this problem, many online publishers choose either to cooperate with ad-blocker software companies to show acceptable ads or to build a wall that requires users to whitelist the site for content access. However, there is a lack of studies on the impact of these two counter-ad-blocking strategies on user behavior. To address this issue, we conduct a randomized field experiment on the website of Forbes Media, a major US media publisher. The ad-blocker users are divided into a treatment group, which receives the wall strategy, and a control group, which receives the acceptable-ads strategy. We utilize the difference-in-differences method to estimate the causal effects. Our study shows that the wall strategy has an overall negative impact on user engagement. However, it has no statistically significant effect on high-engaged users, as they would view the pages no matter what strategy is used. It has a big impact on low-engaged users, who have no loyalty to the site. Our study also shows that revisiting behavior decreases over time, but the ratio of session whitelisting increases over time as the remaining users have relatively high loyalty and high engagement. The paper concludes with discussions of managerial insights for publishers when determining counter-ad-blocking strategies.
    Comment: In Proceedings of The Web Conference 2020 (WWW '20)
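
    The causal estimate mentioned above rests on the standard two-period difference-in-differences comparison: the change in engagement for the wall (treatment) group minus the change for the acceptable-ads (control) group. The snippet below just computes that quantity; the numbers are purely illustrative and are not the Forbes data.

```python
import numpy as np

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Two-period difference-in-differences estimator:
    (treatment change over time) minus (control change over time)."""
    return ((np.mean(treat_post) - np.mean(treat_pre))
            - (np.mean(ctrl_post) - np.mean(ctrl_pre)))

# Hypothetical pages-viewed-per-user figures (not the Forbes data).
treat_pre, treat_post = [5.0, 6.0, 4.0], [3.0, 4.0, 2.0]   # wall strategy
ctrl_pre,  ctrl_post  = [5.0, 6.0, 4.0], [5.0, 5.5, 4.5]   # acceptable ads
print(did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post))   # -2.0
```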